U = \sum_{ij} \frac{q_i q_j}{4\pi\varepsilon_0 \varepsilon_1} \left[ \frac{1}{r_{ij}} + \frac{C_{RF}\, r_{ij}^2}{2 R_{rf}^3} - \frac{1}{R_{rf}}\left(1 + \frac{C_{RF}}{2}\right) \right], \qquad C_{RF} = \frac{2(\varepsilon_2 - \varepsilon_1)(1 + \kappa R_{rf}) + \varepsilon_2 (\kappa R_{rf})^2}{(\varepsilon_1 + 2\varepsilon_2)(1 + \kappa R_{rf}) + \varepsilon_2 (\kappa R_{rf})^2}    (8.28)
where
ε1 and ε2 are the relative dielectric permittivity values inside and outside the cutoff radius R_rf, respectively
κ is the reciprocal of the Debye screening length of the medium outside the cutoff radius
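As a concrete illustration, the short Python sketch below evaluates the reaction-field pair energy of Equation 8.28 for a single charge pair inside the cutoff. The function names, the MD-style unit choices (elementary charges, nm, kJ mol⁻¹), and the example parameter values are illustrative assumptions rather than anything specified in the text.

```python
# Coulomb constant 1/(4*pi*eps0) in kJ mol^-1 nm e^-2 (standard MD unit system).
KE = 138.935458

def c_rf(eps1, eps2, kappa, r_cut):
    """Reaction-field constant C_RF of Equation 8.28."""
    x = kappa * r_cut
    num = 2.0 * (eps2 - eps1) * (1.0 + x) + eps2 * x**2
    den = (eps1 + 2.0 * eps2) * (1.0 + x) + eps2 * x**2
    return num / den

def u_pair_rf(q_i, q_j, r_ij, eps1, eps2, kappa, r_cut):
    """Reaction-field energy (kJ/mol) of one i-j charge pair inside the cutoff."""
    if r_ij >= r_cut:
        return 0.0  # pairs beyond the cutoff radius are neglected
    c = c_rf(eps1, eps2, kappa, r_cut)
    # The bracketed term vanishes at r_ij = r_cut, so the energy goes smoothly to zero there.
    bracket = 1.0 / r_ij + c * r_ij**2 / (2.0 * r_cut**3) - (1.0 + c / 2.0) / r_cut
    return KE * q_i * q_j / eps1 * bracket

# Example with assumed values: opposite unit charges 0.5 nm apart, a water-like
# exterior dielectric (eps2 ~ 78.5), a 1.2 nm cutoff, and kappa = 1.0 nm^-1.
print(u_pair_rf(1.0, -1.0, 0.5, eps1=1.0, eps2=78.5, kappa=1.0, r_cut=1.2))
```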
The software application NAMD (Nanoscale Molecular Dynamics, developed at the
University of Illinois at Urbana-Champaign, United States) has also recently emerged as
a valuable molecular simulation tool. NAMD benefits from using a popular interactive
molecular graphics rendering program called "visual molecular dynamics" (VMD) for simulation
initialization and analysis, but its scripting is also compatible with other MD software
applications, including AMBER and CHARMM, and it was designed to scale simulations very
efficiently to hundreds of processors on high-end parallel computing platforms,
incorporating Ewald summation methods. The standard force field used is essentially that of
Equation 8.14. Its chief popularity has been for SMD simulations, for which the setup of the
steering conditions is made intuitive through VMD.
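To make the idea of "steering conditions" concrete: in a constant-velocity SMD run, the steered atom (or group of atoms) is attached to a harmonic spring whose anchor point moves at a fixed speed along a chosen pulling direction. The generic Python sketch below illustrates that steering force in outline only; it is not NAMD or VMD syntax, and the spring constant, pulling speed, and direction are assumed illustrative values.

```python
def smd_force(x, x0, n, k, v, t):
    """
    Constant-velocity steered-MD force on the steered atom (generic sketch).

    x  : current position of the steered atom (3-vector, nm)
    x0 : its initial position (3-vector, nm)
    n  : unit vector along the pulling direction
    k  : spring constant (kJ mol^-1 nm^-2)
    v  : pulling speed (nm ps^-1)
    t  : elapsed simulation time (ps)
    """
    # Displacement of the atom along the pulling direction, measured relative to
    # the spring anchor point x0 + v*t*n, which moves at constant speed v.
    proj = sum((xi - x0i) * ni for xi, x0i, ni in zip(x, x0, n))
    ext = v * t - proj                 # spring extension along n
    return [k * ext * ni for ni in n]  # harmonic steering force on the atom

# Illustrative parameters (assumed, not from the text): pull along +z at
# 0.01 nm/ps with a 500 kJ mol^-1 nm^-2 spring, evaluated at t = 10 ps.
f = smd_force(x=[0.0, 0.0, 0.05], x0=[0.0, 0.0, 0.0],
              n=[0.0, 0.0, 1.0], k=500.0, v=0.01, t=10.0)
```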
Other popular packages include those dedicated to more niche simulation applications,
some commercial but many homegrown from academic simulation research communities,
for example, ab initio simulations (e.g., CASTEP, ONETEP, NWChem, TeraChem, VASP),
CG approaches (e.g., LAMMPS, RedMD), and mesoscale modeling (e.g., Culgi). The CG tools
oxDNA and oxRNA have utility in mechanical and topological CG simulations of nucleic
acids. Other packages are dedicated to specific docking processes (e.g., ICM, SCIGRESS),
molecular design (e.g., Discovery Studio, TINKER), and folding (e.g., Abalone, fold.it, FoldX),
with various valuable molecular visualization tools also available (e.g., Desmond, VMD).
The physical computation time for MD simulations can range from a few minutes
for a simple biomolecule containing ~100 atoms up to several weeks for the most complex
simulations of systems that include up to ~10,000 atoms. Computational time has improved
dramatically over the past decade, however, due to four key developments:
1 CPUs have become faster, with continued miniaturization of the effective minimum
size of a single transistor in a CPU (~40 nm at the time of writing) enabling significantly
more processing power. Today, CPUs typically contain multiple cores. A core
is an independent processing unit of a CPU, and even entry-level PCs and laptops
use a CPU with two cores (dual-core processors). More advanced computers contain
4–8 core processors, and the current maximum number of cores for any processor is
16. This increase in CPU complexity is broadly consistent with Moore's law (Moore,
1965). Moore's law was an extrapolation of the reduction of transistor length scale with
time, made by Gordon E. Moore, the cofounder of Intel, predicting that the area density of
transistors in densely packed integrated circuits would continue to roughly double
every two years. The maximum size of a CPU is limited by heat dissipation considerations
(though this too has improved recently through the use of circulating cooling
fluids instead of air cooling). The atomic spacing of crystalline silicon is ~0.5 nm, which
may appear to place an ultimate limit on transistor miniaturization; however, there
is increasing evidence that new transistor technology based on smaller length scale
quantum tunneling may push the size down even further.
2 Improvements in simulation software have enabled far greater parallelization of
simulations, especially by routing parallel aspects of a simulation to different cores of
the same CPU, and/or to different CPUs in a networked cluster of computers. There
is a limit, however, to how much of a simulation can be efficiently parallelized, since